Overview

Welcome to the DynamoEnhance Fine-Tuning SDK documentation. This SDK provides advanced tools for fine-tuning large language models (LLMs) with differential privacy, ensuring your models protect the sensitive data that they’re trained on.

DynamoEnhance ML SDK

The DynamoEnhance ML SDK is designed to facilitate the fine-tuning of LLMs with state-of-the-art privacy techniques. By integrating differential privacy, the SDK ensures that the privacy of individual data points is preserved during the training process. This approach is particularly beneficial for organizations handling sensitive data, such as personally identifiable information (PII), as it minimizes the risk of data leakage.

Benefits of Fine-Tuning with Differential Privacy

  1. Enhanced Privacy: Differential privacy ensures that the inclusion or exclusion of a single data point has a negligible impact on the overall model, making it difficult to infer specific information about any individual data point (the formal guarantee is stated just after this list).
  2. Regulatory Compliance: Adhering to privacy standards and regulations (such as GDPR and CCPA) is critical. Differential privacy helps meet these requirements by providing strong privacy guarantees.
  3. Trust and Security: By protecting sensitive data during the training process, organizations can build trust with their users and stakeholders, ensuring that data security is a top priority.
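
For reference, the guarantee behind point 1 is usually stated as (ε, δ)-differential privacy. This definition comes from the differential-privacy literature rather than from the SDK itself: a randomized training procedure M is (ε, δ)-differentially private if, for any two training sets D and D′ that differ in a single record and any set S of possible trained models,

```latex
\Pr\big[M(D) \in S\big] \;\le\; e^{\varepsilon} \cdot \Pr\big[M(D') \in S\big] + \delta
```

Smaller values of ε and δ mean that adding or removing any one record changes the distribution of trained models only marginally, which is exactly what makes it hard to tell whether a specific record was part of the training set.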

SDK Features

  • Differential Privacy Integration: Easily apply differential privacy during the fine-tuning process with a single line of code; an illustrative open-source sketch follows this list.
  • Flexible Configuration: Use YAML configuration files to customize training parameters, including privacy settings, model parameters, and dataset configurations (see the example config embedded in the sketch below).
  • Compatibility: The SDK supports integration with popular libraries such as Transformers, HuggingFace Hub, and LoRA (Low-Rank Adaptation).
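
This overview does not reproduce the SDK's own function names or configuration schema, so the sketch below is purely illustrative: it wires up differentially private LoRA fine-tuning of a small causal LM using the open-source Transformers, PEFT, and Opacus libraries, driven by an embedded YAML config. The model choice, YAML keys, and hyperparameters are placeholder assumptions, not DynamoEnhance settings.

```python
# Illustrative sketch only -- this is NOT the DynamoEnhance API.
# It shows the general shape of differentially private LoRA fine-tuning
# driven by a YAML config, using open-source libraries (transformers,
# peft, opacus). All names, keys, and hyperparameters are placeholders.
import yaml
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model
from opacus import PrivacyEngine

# Hypothetical YAML layout covering model, privacy, and training settings.
CONFIG_YAML = """
model:
  name: gpt2
  lora_r: 8
  lora_alpha: 16
privacy:
  target_epsilon: 8.0
  target_delta: 1.0e-5
  max_grad_norm: 1.0
training:
  epochs: 1
  batch_size: 4
  learning_rate: 5.0e-5
"""
cfg = yaml.safe_load(CONFIG_YAML)

# 1. Load a small causal LM and attach LoRA adapters, so only a few
#    low-rank matrices are actually trained.
tokenizer = AutoTokenizer.from_pretrained(cfg["model"]["name"])
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(cfg["model"]["name"])
lora_config = LoraConfig(
    r=cfg["model"]["lora_r"],
    lora_alpha=cfg["model"]["lora_alpha"],
    target_modules=["c_attn"],  # GPT-2's attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.train()

# 2. Tokenize a toy corpus standing in for the sensitive training data.
texts = [f"synthetic training record number {i}" for i in range(32)]
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
dataset = TensorDataset(enc["input_ids"], enc["attention_mask"])
data_loader = DataLoader(dataset, batch_size=cfg["training"]["batch_size"])

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=cfg["training"]["learning_rate"])

# 3. Wrap model, optimizer, and loader so gradients are clipped per
#    sample and noised (DP-SGD), calibrated to the (epsilon, delta) budget.
privacy_engine = PrivacyEngine()
model, optimizer, data_loader = privacy_engine.make_private_with_epsilon(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    target_epsilon=cfg["privacy"]["target_epsilon"],
    target_delta=cfg["privacy"]["target_delta"],
    max_grad_norm=cfg["privacy"]["max_grad_norm"],
    epochs=cfg["training"]["epochs"],
)

# 4. An ordinary training loop; the privacy accounting happens inside
#    the wrapped optimizer and data loader.
for epoch in range(cfg["training"]["epochs"]):
    for input_ids, attention_mask in data_loader:
        if input_ids.numel() == 0:
            continue  # Poisson sampling can occasionally yield an empty batch
        labels = input_ids.clone()
        labels[attention_mask == 0] = -100  # ignore padding positions in the loss
        optimizer.zero_grad()
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
        out.loss.backward()
        optimizer.step()
    print("epsilon spent so far:", privacy_engine.get_epsilon(cfg["privacy"]["target_delta"]))
```

In practice you would point the tokenizer and data loader at your own (sensitive) dataset and tune the privacy budget: a lower target_epsilon gives stronger privacy guarantees at the cost of more gradient noise and, typically, some reduction in model quality.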